9 research outputs found

    Real-time lattice boltzmann shallow waters method for breaking wave simulations

    We present a new approach for the simulation of surface-based fluids based on a hybrid formulation of the lattice Boltzmann method (LBM) for shallow waters and particle systems. The modified LBM can handle arbitrary underlying terrain conditions and arbitrary fluid depth. It also introduces a novel method for tracking dry-wet regions and moving boundaries. Dynamic rigid bodies are also included in our simulations using a two-way coupling. Certain features of the simulation that the LBM cannot handle because of its heightfield nature, such as breaking waves, are detected and automatically turned into splash particles. Here we use a ballistic particle system, but our hybrid method can handle more complex systems such as SPH. Both the LBM and the particle systems are implemented in CUDA, although dynamic rigid bodies are simulated on the CPU. We show the effectiveness of our method with various examples, which achieve real-time performance on consumer-level hardware.
    Peer reviewed. Postprint (author's final draft).
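    To make the shallow-water LBM concrete, below is a minimal sketch of a single D2Q9 BGK collision step in CUDA, using Zhou-style shallow-water equilibria with lattice speed e = 1. The grid size, relaxation time, and the crude dry-cell guard are illustrative assumptions; the paper's dry-wet tracking, streaming step, terrain handling, and particle coupling are all omitted.

```cuda
// Sketch: one BGK collision step of a D2Q9 shallow-water LBM.
// NX, NY, TAU, and the dry-cell threshold are illustrative choices.
#include <cuda_runtime.h>

#define NX 256
#define NY 256
#define Q  9
__constant__ float GRAV = 9.81f;  // gravitational acceleration
__constant__ int   ex[Q] = {0, 1, 0,-1, 0, 1,-1,-1, 1};
__constant__ int   ey[Q] = {0, 0, 1, 0,-1, 1, 1,-1,-1};

__global__ void collideSWE(float* f, float tau)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;
    int cell = y * NX + x;

    // Macroscopic water depth h and velocity (ux, uy) from the distributions.
    float h = 0.f, hux = 0.f, huy = 0.f, fi[Q];
    for (int i = 0; i < Q; ++i) {
        fi[i] = f[i * NX * NY + cell];
        h   += fi[i];
        hux += ex[i] * fi[i];
        huy += ey[i] * fi[i];
    }
    if (h < 1e-6f) return;            // crude dry-cell guard (placeholder)
    float ux = hux / h, uy = huy / h;
    float uu = ux * ux + uy * uy;

    // BGK relaxation toward the shallow-water equilibria (Zhou form, e = 1).
    for (int i = 0; i < Q; ++i) {
        float eu = ex[i] * ux + ey[i] * uy;
        float feq;
        if (i == 0)
            feq = h - 5.f * GRAV * h * h / 6.f - 2.f * h * uu / 3.f;
        else if (i <= 4)              // axis-aligned links
            feq = GRAV * h * h / 6.f  + h * (eu / 3.f  + eu * eu / 2.f - uu / 6.f);
        else                          // diagonal links
            feq = GRAV * h * h / 24.f + h * (eu / 12.f + eu * eu / 8.f - uu / 24.f);
        f[i * NX * NY + cell] = fi[i] - (fi[i] - feq) / tau;
    }
}

int main()
{
    float* f;
    cudaMalloc(&f, sizeof(float) * Q * NX * NY);
    cudaMemset(f, 0, sizeof(float) * Q * NX * NY);  // real code would init to equilibrium
    dim3 block(16, 16), grid((NX + 15) / 16, (NY + 15) / 16);
    collideSWE<<<grid, block>>>(f, /*tau=*/1.0f);
    cudaDeviceSynchronize();
    cudaFree(f);
    return 0;
}
```

    In a full solver this collision kernel would be paired with a streaming kernel and the paper's wet-dry and breaking-wave detection; one thread per cell keeps the update embarrassingly parallel, which is what makes real-time rates reachable on consumer GPUs.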

    GPU-Accelerated Large-Eddy Simulation of Turbulent Channel Flows

    High performance computing clusters that are augmented with cost- and power-efficient graphics processing units (GPUs) provide new opportunities to broaden the use of the large-eddy simulation technique to study high Reynolds number turbulent flows in fluids engineering applications. In this paper, we extend our earlier work on multi-GPU acceleration of an incompressible Navier-Stokes solver to include a large-eddy simulation (LES) capability. In particular, we implement the Lagrangian dynamic subgrid scale model and compare our results against existing direct numerical simulation (DNS) data of a turbulent channel flow at Reτ = 180. Overall, our LES results match fairly well with the DNS data. Our results show that the Reτ = 180 case can be entirely simulated on a single GPU, whereas higher Reynolds number cases can benefit from a GPU cluster.
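    The core of the Lagrangian dynamic model (Meneveau, Lund & Cabot 1996) is an exponential average of the contractions I_LM and I_MM along fluid pathlines, from which the Smagorinsky coefficient follows as Cs² = I_LM / I_MM. The kernel below is a minimal sketch of that update step; the contractions LM = L_ij·M_ij and MM = M_ij·M_ij are assumed precomputed per cell, pathline interpolation is simplified to the nearest upstream cell, and all array names, grid sizes, and the host-side setup are illustrative rather than the solver's actual API.

```cuda
// Sketch: Lagrangian-averaging step of the dynamic Smagorinsky model.
#include <cuda_runtime.h>

#define NX 128
#define NY 128
#define NZ 128

__device__ int idx3(int i, int j, int k) { return (k * NY + j) * NX + i; }

__global__ void lagrangianSGS(const float* LM, const float* MM,
                              const float* u, const float* v, const float* w,
                              const float* ILMold, const float* IMMold,
                              float* ILM, float* IMM, float* Cs2,
                              float dt, float dx, float delta)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    if (i >= NX || j >= NY || k >= NZ) return;
    int c = idx3(i, j, k);

    // Nearest upstream cell (stand-in for trilinear pathline interpolation).
    int iu = min(max(i - (int)roundf(u[c] * dt / dx), 0), NX - 1);
    int ju = min(max(j - (int)roundf(v[c] * dt / dx), 0), NY - 1);
    int ku = min(max(k - (int)roundf(w[c] * dt / dx), 0), NZ - 1);
    int up = idx3(iu, ju, ku);

    // Relaxation weight from the Lagrangian time scale
    // T = 1.5 * delta * (I_LM * I_MM)^(-1/8).
    float prod = fmaxf(ILMold[up] * IMMold[up], 1e-20f);
    float T    = 1.5f * delta * powf(prod, -0.125f);
    float eps  = (dt / T) / (1.f + dt / T);

    // Exponential averaging along the pathline; clip I_LM at zero so the
    // model coefficient stays non-negative.
    ILM[c] = fmaxf(eps * LM[c] + (1.f - eps) * ILMold[up], 0.f);
    IMM[c] =       eps * MM[c] + (1.f - eps) * IMMold[up];

    // Cs^2 = I_LM / I_MM; the eddy viscosity nu_t = Cs^2 * delta^2 * |S|
    // is formed later, where the strain-rate magnitude |S| is available.
    Cs2[c] = ILM[c] / fmaxf(IMM[c], 1e-20f);
}
```

    Because the update at each cell touches only one upstream value, the scheme avoids the expensive planar or volume averaging of the standard dynamic model, which is what makes it attractive on GPUs and in geometries without homogeneous directions.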

    Stepping into fully GPU accelerated biomedical applications

    doi: 10.1007/978-3-662-43880-0_1
    We present ideas and first results on a GPU acceleration of a non-linear solver embedded into the biomedical application code CARP. The linear system solvers have already been transferred in the past, so we concentrate on how to extend the GPU acceleration to larger portions of the code. The finite element assembling of stiffness and mass matrices takes at least 50% of the CPU time, and therefore we investigate this process for the bidomain equations, with a focus on later use in non-linear and/or time-dependent problems. The CUDA code for matrix calculation and assembling is faster by a factor of up to 90 compared to a single CPU core. The routines were integrated into CARP's main code and are already used to assemble the FE matrices of the bidomain model. Further performance studies are still required for the bidomain-mechanics model.
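    The general pattern behind GPU finite element assembly is one thread per element, with atomic additions resolving write conflicts where neighbouring elements share nodes. The sketch below shows that pattern in its simplest form, using 1D linear elements and dense matrix storage purely for brevity; CARP's bidomain assembly works on 3D meshes with sparse storage, and every name here is illustrative, not CARP's API.

```cuda
// Sketch: element-parallel assembly of stiffness K and mass M matrices.
#include <cuda_runtime.h>

#define NELEM 1023          // number of 1D elements (illustrative)
#define NNODE (NELEM + 1)   // linear elements: nodes = elements + 1

__global__ void assemble1D(float* K, float* M, float h, float sigma)
{
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= NELEM) return;
    int n0 = e, n1 = e + 1;   // the two nodes of element e

    // Local stiffness (sigma/h)*[[1,-1],[-1,1]] and consistent mass
    // (h/6)*[[2,1],[1,2]] for a linear 1D element of length h.
    float ks = sigma / h;
    float ms = h / 6.f;

    // Scatter into the global matrices; neighbouring elements touch the
    // same nodes, so the additions must be atomic.
    atomicAdd(&K[n0 * NNODE + n0],  ks);
    atomicAdd(&K[n0 * NNODE + n1], -ks);
    atomicAdd(&K[n1 * NNODE + n0], -ks);
    atomicAdd(&K[n1 * NNODE + n1],  ks);

    atomicAdd(&M[n0 * NNODE + n0], 2.f * ms);
    atomicAdd(&M[n0 * NNODE + n1],       ms);
    atomicAdd(&M[n1 * NNODE + n0],       ms);
    atomicAdd(&M[n1 * NNODE + n1], 2.f * ms);
}

int main()
{
    float *K, *M;
    size_t bytes = sizeof(float) * NNODE * NNODE;
    cudaMalloc(&K, bytes);  cudaMemset(K, 0, bytes);
    cudaMalloc(&M, bytes);  cudaMemset(M, 0, bytes);
    assemble1D<<<(NELEM + 255) / 256, 256>>>(K, M, /*h=*/1e-3f, /*sigma=*/1.f);
    cudaDeviceSynchronize();
    cudaFree(K); cudaFree(M);
    return 0;
}
```

    Since each element's local matrix is computed independently, the assembly is bandwidth-bound rather than compute-bound, which is consistent with the large speedups over a single CPU core that the paper reports.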